Learning functions and their derivatives using Taylor series and neural networks
Authors
Abstract
This paper describes a design based on the Taylor series to approximate a function and its derivatives. After being trained, derivatives are obtained in a fast feed-forward evaluation without the need for backpropagation or forward perturbation. The Taylor network is basically an implementation of the Taylor series of a function. However, instead of only having one expansion point, it uses a function of expansion points and takes account of the order of the Taylor series by biasing individual terms of the Taylor series. A simple learning algorithm is given and demonstrated with a simple experiment to learn a sinusoid and its first derivative.

Introduction, Motivation

Neural networks are widely used to approximate function mappings given through a data representation of the function. Often it is desirable to have derivatives of this function. While the first derivative can often be calculated efficiently through backpropagation, there are cases where either second derivatives are necessary or backpropagation cannot be applied. For the latter case, forward perturbation could be applied; however, its computational cost is much higher than that of backpropagation. The development of the Taylor network is motivated by the desire for fast access to the derivatives of a function, e.g. in Adaptive Critic Designs as proposed by Werbos [1] and Prokhorov [2]. There the Taylor network could be used to combine the cost-to-go function approximation (HDP, Heuristic Dynamic Programming) with derivative critics (DHP, Dual Heuristic Programming) towards GDHP (Global Dual Heuristic Programming). The idea behind the Taylor network is to use the Taylor series expansion of the function f, but instead of just using one expansion point x_0, the individual terms of the Taylor series are seen as functions of the expansion point itself. These functions can be approximated by individual neural networks.
Learning the goal function f involves training the neural networks to learn the derivatives as functions of the input values x and of expansion points x_0 chosen close to x.

Design of a Taylor Network

Instead of learning a function f(x) given by some data D = {i = 1..N : (x_i, f_i)} directly, f(x) is developed into a Taylor series using some expansion points {k = 1..K : x_{0,k}}. These can be any points, but are preferably related to the given data D: e.g., if D is clustered in the input space X around some centers x_{c,k}, any points close to these centers would be well suited for the x_{0,k}. For the sake of notational clarity it is assumed that f(x), x and x_0 are scalars; an extension to vectors can be carried out using a tensor formulation. In the context of neural networks, f(x) can still be scalar without loss of generality, but x and x_0 are then supposed to be vectors, in which case f^(1)(x_0) and f^(2)(x_0) denote the Jacobian and Hessian of f at x_0, respectively. The "true" underlying data-generating process f̃(x), which can only be experienced through the observed data, is denoted by f̃, and the function approximating f̃ is denoted by f. Therefore, f is accessible whereas f̃ is not. The Taylor series of f is given by equation (1), where
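As a concrete illustration of this design, here is a minimal sketch (our own toy construction, not the paper's learning algorithm): the Taylor coefficients c_n(x_{0,k}), which stand in for the derivative networks, are fitted by local least squares around a grid of expansion points; afterwards the function and its first derivative are both read off in a single fast feed-forward evaluation, mirroring the paper's sinusoid experiment.

```python
import math
import numpy as np

# Toy Taylor-network-style approximation of sin(x) and its first derivative.
# Each expansion point x0_k carries a truncated Taylor model whose coefficients
# c_n(x0_k) approximate the derivatives f^(n)(x0_k); here they are fitted by
# local least squares instead of by individual neural networks.

rng = np.random.default_rng(0)
xs = rng.uniform(0, 2 * np.pi, 400)           # training inputs
ys = np.sin(xs)                               # training targets
x0_grid = np.linspace(0, 2 * np.pi, 16)       # expansion points x0_k
order = 3                                     # truncation order of the series

coeffs = np.zeros((len(x0_grid), order + 1))  # c_n(x0_k) ~ f^(n)(x0_k)
for k, x0 in enumerate(x0_grid):
    mask = np.abs(xs - x0) < 0.8              # use data near the expansion point
    d = xs[mask] - x0
    # design matrix of Taylor monomials d^n / n!
    A = np.stack([d**n / math.factorial(n) for n in range(order + 1)], axis=1)
    coeffs[k], *_ = np.linalg.lstsq(A, ys[mask], rcond=None)

def taylor_eval(x):
    """Feed-forward evaluation: f(x) and f'(x) from the nearest expansion point."""
    k = np.argmin(np.abs(x0_grid - x))
    d = x - x0_grid[k]
    c = coeffs[k]
    f = sum(c[n] * d**n / math.factorial(n) for n in range(order + 1))
    # the derivative reuses the same coefficients, shifted by one order
    df = sum(c[n] * d**(n - 1) / math.factorial(n - 1) for n in range(1, order + 1))
    return f, df

f_hat, df_hat = taylor_eval(1.0)
print(f_hat, df_hat)   # close to sin(1) ≈ 0.841 and cos(1) ≈ 0.540
```

The key property the paper emphasizes survives in the sketch: once the coefficients are learned, the derivative costs no backward pass, only an evaluation of the already-differentiated series.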
Similar Articles
Estimating soil cation exchange capacity using regression and neural networks, and the effect of data partitioning on the precision and accuracy of the functions
Soil fertility measures such as cation exchange capacity (CEC) may be used in upgrading soil maps and improving their quality. Direct measurement of CEC is costly and laborious. Indirect estimation of CEC via pedotransfer functions, therefore, may be appropriate and effective. Several delineations of two consociation map units consisting of two soil families, Shahrak series and Chaharmahal seri...
Gyroscope Random Drift Modeling, Using Neural Networks, Fuzzy Neural and Traditional Time-Series Methods
In this paper, statistical and time-series models are used for determining the random drift of a Dynamically Tuned Gyroscope (DTG). This drift is compensated with an optimal predictive transfer function. Also, nonlinear neural-network and fuzzy-neural models are investigated for prediction and compensation of the random drift. Finally, the different models are compared together and their advantages a...
Using the Taylor expansion of multilayer feedforward neural networks
The Taylor series expansion of continuous functions has shown in many fields to be an extremely powerful tool to study the characteristics of such functions. This paper illustrates the power of the Taylor series expansion of multilayer feedforward neural networks. The paper shows how these expansions can be used to investigate positions of decision boundaries, to develop active learning strateg...
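As a hedged illustration of that idea (a toy example of ours, not taken from the cited paper): the first-order Taylor expansion of a tiny tanh network around an input x_0, with the gradient obtained analytically by the chain rule, closely reproduces the network's output near x_0.

```python
import numpy as np

# Toy setup (illustrative weights, not from any paper): linearize a small
# tanh MLP around x0 and check the first-order Taylor expansion
#   f(x) ≈ f(x0) + f'(x0) (x - x0)
# against the exact network output at a nearby point.

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=(4, 1))
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=(1, 1))

def mlp(x):
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2).item()

def mlp_grad(x):
    h = np.tanh(W1 @ x + b1)
    # chain rule: d/dx [W2 tanh(W1 x + b1)] = W2 diag(1 - h^2) W1
    return (W2 @ ((1 - h**2) * W1)).item()

x0 = np.array([[0.3]])
x = np.array([[0.35]])
taylor1 = mlp(x0) + mlp_grad(x0) * (x - x0).item()
lin_err = abs(mlp(x) - taylor1)
print(lin_err)   # small: the linearization is accurate close to x0
```

Higher-order terms of such expansions are what the cited paper exploits to study decision boundaries and sensitivity.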
Investigating the performance of machine learning-based methods in classroom reverberation time estimation using neural networks (Research Article)
Classrooms, as one of the most important educational environments, play a major role in the learning and academic progress of students. Reverberation time, as one of the most important acoustic parameters inside rooms, has a significant effect on sound quality. The inefficiency of classical formulas such as Sabine's caused this article to examine the use of machine learning methods as an alternat...
NUMERICAL APPROACH TO SOLVE SINGULAR INTEGRAL EQUATIONS USING BPFS AND TAYLOR SERIES EXPANSION
In this paper, we give a numerical approach for approximating the solution of second kind Volterra integral equation with Logarithmic kernel using Block Pulse Functions (BPFs) and Taylor series expansion. Also, error analysis shows efficiency and applicability of the presented method. Finally, some numerical examples with exact solution are given.
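For readers unfamiliar with the basis mentioned here, the following minimal sketch (our own illustration; it does not solve an integral equation) shows how a function is expanded in Block Pulse Functions: each BPF coefficient is the mean of the function over one block, so the approximation is piecewise constant with O(1/m) error.

```python
import numpy as np

# Block Pulse Function (BPF) expansion on [0, 1) with m blocks:
# phi_i(t) = 1 on [i/m, (i+1)/m) and 0 elsewhere; the coefficient of f
# on block i is its mean there (approximated here by the midpoint rule).

m = 32
f = np.exp                      # example function to expand
edges = np.linspace(0, 1, m + 1)
mids = (edges[:-1] + edges[1:]) / 2
c = f(mids)                     # c_i ≈ mean of f over block i

def bpf_approx(t):
    i = min(int(t * m), m - 1)  # index of the block containing t
    return c[i]

bpf_err = max(abs(bpf_approx(t) - f(t)) for t in np.linspace(0, 0.999, 500))
print(bpf_err)   # O(1/m): piecewise-constant approximation error
```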